Collective Learning Generally Overcomes Local Optima and Converges to the Global Optimum
Authors
Abstract
Local minima are a major problem for neural-network learning procedures. In this article we present a new procedure, collective learning, that leads to improved global convergence. We have tested the procedure on several neural networks and on the multimodal functions proposed by De Jong and Rastrigin, reaching a success ratio of 100% in our tests. In addition, we offer some remarks on the theory of collective learning and give an estimate of its convergence behavior. Moreover, the procedure is very fast.
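The Rastrigin function named in the abstract is a common stress test for global convergence. A minimal sketch of its standard definition (an assumption on our part; the authors' exact variant may differ) shows why local minima abound: the cosine term creates a regular grid of local minima around the single global optimum at the origin.

```python
import math

def rastrigin(x, A=10.0):
    """Standard Rastrigin test function: highly multimodal, with many
    regularly spaced local minima and a global minimum f(0,...,0) = 0."""
    return A * len(x) + sum(xi * xi - A * math.cos(2 * math.pi * xi)
                            for xi in x)

print(rastrigin([0.0, 0.0]))  # → 0.0 (global optimum at the origin)
print(rastrigin([1.0, 0.0]))  # ≈ 1.0 (a nearby local minimum, not global)
```

A gradient-based learner started near an integer lattice point tends to settle in the corresponding local basin, which is exactly the failure mode a global procedure must overcome.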
Similar Resources
Data Clustering Based on an Efficient Hybrid of K-Harmonic Means, PSO and GA
Clustering is one of the most commonly used techniques in data mining. K-means is one of the most popular clustering techniques owing to its simplicity and efficiency; however, it is sensitive to initialization and easily trapped in local optima. K-harmonic means clustering solves the initialization problem with a built-in boosting function, but it still suffers from running into local optima. Parti...
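The sensitivity to initialization described in this snippet can be illustrated with a toy run of plain Lloyd's k-means (a hypothetical 1-D example of ours, not the hybrid KHM/PSO/GA method of the paper): two different starting centers converge to partitions of different quality.

```python
def kmeans_1d(points, centers, iters=20):
    """Plain Lloyd's k-means on 1-D data; returns (centers, sse)."""
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[j].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    sse = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, sse

points = [0.0, 0.5, 9.5, 10.0, 10.5, 20.0]
_, sse_good = kmeans_1d(points, [0.0, 10.0])   # well-placed start
_, sse_bad = kmeans_1d(points, [10.0, 20.0])   # poorly placed start
# Both runs converge, but the second is trapped in a worse local optimum
# (its sum of squared errors is noticeably higher).
```

This is precisely the behavior that motivates hybridizing k-means-style updates with population-based global search.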
A Differential Evolution and Spatial Distribution based Local Search for Training Fuzzy Wavelet Neural Network
Many parameter-tuning algorithms have been proposed for training Fuzzy Wavelet Neural Networks (FWNNs). The absence of an appropriate structure, convergence to local optima, and low speed are deficiencies of the learning algorithms in previous studies. In this paper, a Memetic Algorithm (MA) is introduced to train FWNNs and address these learning deficiencies. Differential Evolution...
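For background, differential evolution is most often presented in its classic DE/rand/1/bin form. The sketch below is a generic minimal implementation with assumed parameter values (it is not the memetic algorithm of the cited paper), minimizing a simple sphere function:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=1):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of
    two random vectors, cross binomially with the target, keep the better."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee one mutated coordinate
            trial = [a[j] + F * (b[j] - c[j])
                     if j == jrand or rng.random() < CR else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

The difference-vector mutation gives DE its self-scaling exploration, which is what makes it attractive as the global component of a memetic scheme.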
The influence of search components and problem characteristics in early life cycle class modelling
This paper examines the factors affecting the quality of solutions found by meta-heuristic search when optimising object-oriented software class models. From the algorithmic perspective, we examine the effect of encoding, the choice of components such as the global search heuristic, and various means of incorporating problem- and instance-specific information. We also consider the effect of problem ch...
Self-adaptive mutations may lead to premature convergence
Self-adaptive mutations are known to endow evolutionary algorithms (EAs) with the ability to locate local optima quickly and accurately, whereas it was unknown whether these local optima are ultimately global optima, provided that the EA runs long enough. In order to answer this question it is assumed that the -EA with self-adaptation is located in the vicinity of a local solution with objective ...
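The self-adaptation this snippet refers to is commonly realized by log-normal mutation of the step size. The sketch below is our illustrative (1, λ)-style generation with assumed parameters, not necessarily the exact EA analyzed in the paper: each offspring first perturbs its own step size, then uses it to mutate the search point.

```python
import math
import random

rng = random.Random(0)

def self_adaptive_generation(x, sigma, f, lam=10):
    """One (1, lambda) generation with log-normal self-adaptation:
    each offspring mutates sigma multiplicatively, then mutates x with
    its own sigma; the best offspring replaces the parent (comma selection)."""
    tau = 1.0 / math.sqrt(len(x))  # a common learning-rate choice
    best = None
    for _ in range(lam):
        s = sigma * math.exp(tau * rng.gauss(0.0, 1.0))
        y = [xi + s * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if best is None or fy < best[0]:
            best = (fy, y, s)
    _, y, s = best
    return y, s

sphere = lambda v: sum(t * t for t in v)
x, sigma = [5.0, 5.0], 1.0
for _ in range(300):
    x, sigma = self_adaptive_generation(x, sigma, sphere)
# sigma shrinks as the search approaches an optimum, which is exactly why
# the step size can become too small to escape a merely local basin.
```

The shrinking step size is the mechanism behind both the fast local convergence and the premature-convergence risk the paper investigates.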
Learning Deep Models: Critical Points and Local Openness
With the increasing interest in a deeper understanding of the loss surface of many non-convex deep models, this paper presents a unifying framework to study the local/global optima equivalence of the optimization problems arising from the training of such non-convex models. Using the local openness property of the underlying training models, we provide simple sufficient conditions under which any loc...
Journal:
Volume / Issue:
Pages: -
Publication date: 1995